In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Once you have completed all the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to HTML, all the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
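If you prefer the command line, running all cells and then executing jupyter nbconvert --to html followed by your notebook's filename produces the same HTML export as the menu route described above.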
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
Photo sharing and photo storage services like to have location data for each photo that is uploaded. With the location data, these services can build advanced features, such as automatic suggestion of relevant tags or automatic photo organization, which help provide a compelling user experience. Although a photo's location can often be obtained by looking at the photo's metadata, many photos uploaded to these services will not have location metadata available. This can happen when, for example, the camera capturing the picture does not have GPS or if a photo's metadata is scrubbed due to privacy concerns.
If no location metadata for an image is available, one way to infer the location is to detect and classify a discernible landmark in the image. Given the large number of landmarks across the world and the immense volume of images that are uploaded to photo sharing services, using human judgement to classify these landmarks would not be feasible.
In this notebook, you will take the first steps towards addressing this problem by building models that automatically predict the location of an image based on any landmarks it depicts. At the end of this project, your code will accept any user-supplied image as input and suggest the top k most relevant landmarks out of 50 possible landmarks from across the world. The image below displays a potential sample output of your finished project.

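As a rough preview of how that top-k suggestion can be produced once a trained model exists, the sketch below ranks class probabilities with torch.topk; the model, image tensor, and class-name list passed in are illustrative placeholders rather than part of the template code.

import torch
import torch.nn.functional as F

def suggest_landmarks(model, image_tensor, class_names, k=3):
    # image_tensor: a single preprocessed image of shape (3, H, W)
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))   # add a batch dimension
        probs = F.softmax(logits, dim=1)            # convert scores to probabilities
        top_p, top_idx = probs.topk(k, dim=1)       # keep the k most likely classes
    return [(class_names[i.item()], p.item()) for i, p in zip(top_idx[0], top_p[0])]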
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
Note: if you are using the Udacity workspace, YOU CAN SKIP THIS STEP. The dataset can be found in the /data folder and all required Python modules have been installed in the workspace.
Download the landmark dataset.
Unzip the folder and place it in this project's home directory, at the location /landmark_images.
Install the following Python modules:
In this step, you will create a CNN that classifies landmarks. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 20%.
Although 20% may seem low at first glance, it becomes more reasonable once you appreciate how difficult the problem is. An image taken at a landmark often captures a fairly mundane view of an animal or plant, like the following picture.
Just by looking at that image alone, would you have been able to guess that it was taken at the Haleakalā National Park in Hawaii?
An accuracy of 20% is significantly better than random guessing, which would provide an accuracy of just 2%. In Step 2 of this notebook, you will have the opportunity to greatly improve accuracy by using transfer learning to create a CNN.
Remember that practice is far ahead of theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
Use the code cell below to create three separate data loaders: one for training data, one for validation data, and one for test data. Randomly split the images located at landmark_images/train to create the train and validation data loaders, and use the images located at landmark_images/test to create the test data loader.
All three of your data loaders should be accessible via a dictionary named loaders_scratch. Your train data loader should be at loaders_scratch['train'], your validation data loader should be at loaders_scratch['valid'], and your test data loader should be at loaders_scratch['test'].
You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
import torch
from torch.utils.data.dataloader import DataLoader
from torchvision.datasets import ImageFolder
from torchvision import transforms
import numpy as np
from torch.utils.data.sampler import SubsetRandomSampler
data_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 40
num_workers = 4
data_root = "./landmark_images"
train_data = ImageFolder(f'{data_root}/train', transform=data_transform)
test_data = ImageFolder(f'{data_root}/test', transform=data_transform)
# obtain training indices that will be used for validation
valid_size = 0.2
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
loaders_scratch = {
    'train': DataLoader(train_data, batch_size=batch_size, num_workers=num_workers, sampler=train_sampler),
    'valid': DataLoader(train_data, batch_size=batch_size, num_workers=num_workers, sampler=valid_sampler),
    'test': DataLoader(test_data, batch_size=batch_size, num_workers=num_workers, shuffle=True)
}
Question 1: Describe your chosen procedure for preprocessing the data.
Answer: All images are resized to 256x256 so that every batch has a consistent tensor shape, converted to tensors, and normalized with a mean and standard deviation of 0.5 per channel. The training images are also augmented with random horizontal flips and random rotations of up to 20 degrees to reduce overfitting. The images in landmark_images/train are shuffled and split 80/20 into training and validation sets using SubsetRandomSampler, and the images in landmark_images/test are used as the test set.
Use the code cell below to retrieve a batch of images from your train data loader, display at least 5 images simultaneously, and label each displayed image with its class name (e.g., "Golden Gate Bridge").
Visualizing the output of your data loader is a great way to ensure that your data loading and preprocessing are working as expected.
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

## TODO: visualize a batch of the train data loader
## the class names can be accessed at the `classes` attribute
## of your dataset object (e.g., `train_data.classes`)
dataiter = iter(loaders_scratch['train'])
images, labels = next(dataiter)
images = images.numpy()
print(images.shape)

# undo the normalization (mean 0.5, std 0.5) so imshow receives values in [0, 1]
images = images / 2 + 0.5

# plot the images in the batch, along with the corresponding class names
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(batch_size):
    ax = fig.add_subplot(2, batch_size // 2, idx + 1, xticks=[], yticks=[])
    ax.imshow(np.transpose(images[idx], (1, 2, 0)))
    # label each image with its class name rather than the raw label index
    ax.set_title(train_data.classes[labels[idx].item()], fontsize=8)
(40, 3, 256, 256)
# useful variable that tells us whether we should use the GPU
use_cuda = torch.cuda.is_available()
print(use_cuda)
True
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and fill in the function get_optimizer_scratch below.
import torch.nn as nn
## TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()
def get_optimizer_scratch(model):
    ## TODO: select and return an optimizer
    return torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9)
Create a CNN to classify images of landmarks. Use the template in the code cell below.
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
# inspired by the CIFAR exercise notebook
class Net(nn.Module):
    ## TODO: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.conv4 = nn.Conv2d(64, 128, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        # after four 2x2 poolings, a 256x256 input is reduced to 128 feature maps of size 16x16
        self.fc1 = nn.Linear(128 * 16 * 16, 512)
        self.fc2 = nn.Linear(512, 50)
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.pool(F.relu(self.conv4(x)))
        # flatten image input
        x = x.view(-1, 128 * 16 * 16)
        x = self.dropout(x)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = self.fc2(x)
        return x
#-#-# Do NOT modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()
Question 2: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
Answer: The network is adapted from the CIFAR exercise notebook. It stacks four 3x3 convolutional layers that double the number of feature maps at each stage (3 -> 16 -> 32 -> 64 -> 128), each followed by ReLU and 2x2 max pooling, so a 256x256 input is reduced to 128 feature maps of size 16x16. The flattened features pass through a 512-unit fully connected layer with ReLU and a final 50-unit output layer, one unit per landmark class, with dropout (p=0.25) applied before each fully connected layer to reduce overfitting. This depth was enough to exceed the 20% accuracy requirement while remaining fast to train from scratch.
Implement your training algorithm in the code cell below. Save the final model parameters at the filepath stored in the variable save_path.
from pathlib import Path

# inspired by the CIFAR exercise notebook
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf

    for epoch in range(1, n_epochs + 1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0

        # set the module to training mode
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## TODO: find the loss and update the model parameters accordingly
            ## record the average training loss, using something like
            ## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - train_loss))
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - train_loss))

        # set the model to evaluation mode
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## TODO: update average validation loss
            output = model(data)
            loss = criterion(output, target)
            valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - valid_loss))

        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch,
            train_loss,
            valid_loss
        ))

        ## TODO: if the validation loss has decreased, save the model at the filepath stored in save_path
        if valid_loss <= valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
                valid_loss_min,
                valid_loss))
            torch.save(model.state_dict(), Path(save_path))
            valid_loss_min = valid_loss

    return model
Use the code cell below to define a custom weight initialization, and then train with your weight initialization for a few epochs. Make sure that neither the training loss nor validation loss is nan.
Later on, you will be able to see how this compares to training with PyTorch's default weight initialization.
def custom_weight_init(m):
    ## TODO: implement a weight initialization strategy
    classname = m.__class__.__name__
    if classname.find('Linear') != -1:
        # get the number of inputs to the layer
        n = m.in_features
        y = 1.0 / np.sqrt(n)
        # initialize the weights from a normal distribution scaled by the fan-in
        m.weight.data.normal_(0, y)
        m.bias.data.fill_(0)

#-#-# Do NOT modify the code below this line. #-#-#
model_scratch.apply(custom_weight_init)
model_scratch = train(20, loaders_scratch, model_scratch, get_optimizer_scratch(model_scratch),
                      criterion_scratch, use_cuda, 'ignore.pt')
Epoch: 1 Training Loss: 3.839951 Validation Loss: 3.754991 Validation loss decreased (inf --> 3.754991). Saving model ... Epoch: 2 Training Loss: 3.661639 Validation Loss: 3.587864 Validation loss decreased (3.754991 --> 3.587864). Saving model ... Epoch: 3 Training Loss: 3.555730 Validation Loss: 3.669670 Epoch: 4 Training Loss: 3.473376 Validation Loss: 3.487628 Validation loss decreased (3.587864 --> 3.487628). Saving model ... Epoch: 5 Training Loss: 3.397828 Validation Loss: 3.341127 Validation loss decreased (3.487628 --> 3.341127). Saving model ... Epoch: 6 Training Loss: 3.249148 Validation Loss: 3.174120 Validation loss decreased (3.341127 --> 3.174120). Saving model ... Epoch: 7 Training Loss: 3.124471 Validation Loss: 3.134801 Validation loss decreased (3.174120 --> 3.134801). Saving model ... Epoch: 8 Training Loss: 2.971799 Validation Loss: 3.038501 Validation loss decreased (3.134801 --> 3.038501). Saving model ... Epoch: 9 Training Loss: 2.888086 Validation Loss: 3.072056 Epoch: 10 Training Loss: 2.771745 Validation Loss: 2.967797 Validation loss decreased (3.038501 --> 2.967797). Saving model ... Epoch: 11 Training Loss: 2.686053 Validation Loss: 2.958207 Validation loss decreased (2.967797 --> 2.958207). Saving model ... Epoch: 12 Training Loss: 2.540701 Validation Loss: 3.014979 Epoch: 13 Training Loss: 2.442457 Validation Loss: 2.922508 Validation loss decreased (2.958207 --> 2.922508). Saving model ... Epoch: 14 Training Loss: 2.297686 Validation Loss: 3.071568 Epoch: 15 Training Loss: 2.204625 Validation Loss: 2.899706 Validation loss decreased (2.922508 --> 2.899706). Saving model ... Epoch: 16 Training Loss: 2.113629 Validation Loss: 2.873308 Validation loss decreased (2.899706 --> 2.873308). Saving model ... Epoch: 17 Training Loss: 1.952417 Validation Loss: 2.926993 Epoch: 18 Training Loss: 1.876344 Validation Loss: 3.035644 Epoch: 19 Training Loss: 1.830008 Validation Loss: 3.049580 Epoch: 20 Training Loss: 1.773654 Validation Loss: 2.975769
Run the next code cell to train your model.
## TODO: you may change the number of epochs if you'd like,
## but changing it is not required
num_epochs = 50
#-#-# Do NOT modify the code below this line. #-#-#
# function to re-initialize a model with pytorch's default weight initialization
def default_weight_init(m):
    reset_parameters = getattr(m, 'reset_parameters', None)
    if callable(reset_parameters):
        m.reset_parameters()

# reset the model parameters
model_scratch.apply(default_weight_init)

# train the model
model_scratch = train(num_epochs, loaders_scratch, model_scratch, get_optimizer_scratch(model_scratch),
                      criterion_scratch, use_cuda, 'model_scratch.pt')
Epoch: 1 Training Loss: 3.911156 Validation Loss: 3.892248 Validation loss decreased (inf --> 3.892248). Saving model ... Epoch: 2 Training Loss: 3.758773 Validation Loss: 3.656555 Validation loss decreased (3.892248 --> 3.656555). Saving model ... Epoch: 3 Training Loss: 3.594975 Validation Loss: 3.455212 Validation loss decreased (3.656555 --> 3.455212). Saving model ... Epoch: 4 Training Loss: 3.475069 Validation Loss: 3.443555 Validation loss decreased (3.455212 --> 3.443555). Saving model ... Epoch: 5 Training Loss: 3.392001 Validation Loss: 3.361263 Validation loss decreased (3.443555 --> 3.361263). Saving model ... Epoch: 6 Training Loss: 3.290233 Validation Loss: 3.282467 Validation loss decreased (3.361263 --> 3.282467). Saving model ... Epoch: 7 Training Loss: 3.198034 Validation Loss: 3.258862 Validation loss decreased (3.282467 --> 3.258862). Saving model ... Epoch: 8 Training Loss: 3.147372 Validation Loss: 3.165985 Validation loss decreased (3.258862 --> 3.165985). Saving model ... Epoch: 9 Training Loss: 3.022396 Validation Loss: 3.110656 Validation loss decreased (3.165985 --> 3.110656). Saving model ... Epoch: 10 Training Loss: 2.920761 Validation Loss: 3.073354 Validation loss decreased (3.110656 --> 3.073354). Saving model ... Epoch: 11 Training Loss: 2.842058 Validation Loss: 3.018717 Validation loss decreased (3.073354 --> 3.018717). Saving model ... Epoch: 12 Training Loss: 2.753288 Validation Loss: 2.966802 Validation loss decreased (3.018717 --> 2.966802). Saving model ... Epoch: 13 Training Loss: 2.575363 Validation Loss: 2.946825 Validation loss decreased (2.966802 --> 2.946825). Saving model ... Epoch: 14 Training Loss: 2.487954 Validation Loss: 2.948243 Epoch: 15 Training Loss: 2.373865 Validation Loss: 2.848770 Validation loss decreased (2.946825 --> 2.848770). Saving model ... Epoch: 16 Training Loss: 2.208184 Validation Loss: 2.739446 Validation loss decreased (2.848770 --> 2.739446). Saving model ... 
Epoch: 17 Training Loss: 2.133637 Validation Loss: 2.791631 Epoch: 18 Training Loss: 2.015537 Validation Loss: 2.793869 Epoch: 19 Training Loss: 1.962541 Validation Loss: 2.933445 Epoch: 20 Training Loss: 1.769692 Validation Loss: 2.963996 Epoch: 21 Training Loss: 1.677424 Validation Loss: 2.851582 Epoch: 22 Training Loss: 1.646798 Validation Loss: 2.899681 Epoch: 23 Training Loss: 1.590186 Validation Loss: 3.068650 Epoch: 24 Training Loss: 1.497678 Validation Loss: 3.060565 Epoch: 25 Training Loss: 1.404727 Validation Loss: 3.091210 Epoch: 26 Training Loss: 1.402179 Validation Loss: 3.072458 Epoch: 27 Training Loss: 1.287248 Validation Loss: 3.194790 Epoch: 28 Training Loss: 1.276316 Validation Loss: 3.300292 Epoch: 29 Training Loss: 1.237412 Validation Loss: 3.082579 Epoch: 30 Training Loss: 1.041842 Validation Loss: 3.584587 Epoch: 31 Training Loss: 0.983099 Validation Loss: 3.394620 Epoch: 32 Training Loss: 1.060201 Validation Loss: 3.354680 Epoch: 33 Training Loss: 0.962692 Validation Loss: 3.816305 Epoch: 34 Training Loss: 0.975264 Validation Loss: 3.443087 Epoch: 35 Training Loss: 1.020152 Validation Loss: 3.610831 Epoch: 36 Training Loss: 1.145143 Validation Loss: 3.684378 Epoch: 37 Training Loss: 0.871168 Validation Loss: 3.684939 Epoch: 38 Training Loss: 0.869776 Validation Loss: 3.726541 Epoch: 39 Training Loss: 0.873258 Validation Loss: 3.524417 Epoch: 40 Training Loss: 0.919845 Validation Loss: 3.581633 Epoch: 41 Training Loss: 0.801079 Validation Loss: 4.012008 Epoch: 42 Training Loss: 0.916934 Validation Loss: 4.067515 Epoch: 43 Training Loss: 0.861164 Validation Loss: 3.789524 Epoch: 44 Training Loss: 0.829864 Validation Loss: 3.861095 Epoch: 45 Training Loss: 0.942242 Validation Loss: 3.892734 Epoch: 46 Training Loss: 0.858211 Validation Loss: 4.426855 Epoch: 47 Training Loss: 0.833489 Validation Loss: 3.858918 Epoch: 48 Training Loss: 1.148173 Validation Loss: 3.794178 Epoch: 49 Training Loss: 1.163342 Validation Loss: 3.803262 Epoch: 50 Training Loss: 0.827417 Validation Loss: 3.620650
Run the code cell below to try out your model on the test dataset of landmark images: it calculates and prints the test loss and accuracy. Ensure that your test accuracy is greater than 20%.
def test(loaders, model, criterion, use_cuda):
    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    # set the module to evaluation mode
    model.eval()

    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)

    print('Test Loss: {:.6f}\n'.format(test_loss))
    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))

# load the model that achieved the best validation loss
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
Test Loss: 2.716334 Test Accuracy: 31% (396/1250)
You will now use transfer learning to create a CNN that can identify landmarks from images. Your CNN must attain at least 60% accuracy on the test set.
Use the code cell below to create three separate data loaders: one for training data, one for validation data, and one for test data. Randomly split the images located at landmark_images/train to create the train and validation data loaders, and use the images located at landmark_images/test to create the test data loader.
All three of your data loaders should be accessible via a dictionary named loaders_transfer. Your train data loader should be at loaders_transfer['train'], your validation data loader should be at loaders_transfer['valid'], and your test data loader should be at loaders_transfer['test'].
If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
import torch
from torch.utils.data.dataloader import DataLoader
from torchvision.datasets import ImageFolder
from torchvision import transforms
import numpy as np
from torch.utils.data.sampler import SubsetRandomSampler
data_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.RandomResizedCrop(480, scale=(0.9, 1.0)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
batch_size = 10
num_workers = 4
data_root = "./landmark_images"
train_data_transfer = ImageFolder(f'{data_root}/train', transform=data_transform)
test_data_transfer = ImageFolder(f'{data_root}/test', transform=data_transform)
# obtain training indices that will be used for validation
valid_size = 0.2
num_train = len(train_data_transfer)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
loaders_transfer = {
    'train': DataLoader(train_data_transfer, batch_size=batch_size, num_workers=num_workers, sampler=train_sampler),
    'valid': DataLoader(train_data_transfer, batch_size=batch_size, num_workers=num_workers, sampler=valid_sampler),
    'test': DataLoader(test_data_transfer, batch_size=batch_size, num_workers=num_workers)
}
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and fill in the function get_optimizer_transfer below.
## TODO: select loss function
criterion_transfer = nn.CrossEntropyLoss()
def get_optimizer_transfer(model, lr=0.001):
    ## TODO: select and return optimizer
    return torch.optim.Adam(model.classifier.parameters(), lr=lr)
Use transfer learning to create a CNN to classify images of landmarks. Use the code cell below, and save your initialized model as the variable model_transfer.
import torchvision.models as models

# load EfficientNetV2-M pre-trained on ImageNet (pass a weights enum member, not the enum class)
model_transfer = models.efficientnet_v2_m(weights=models.EfficientNet_V2_M_Weights.DEFAULT)
model_transfer
EfficientNet(
  (features): Sequential(
    (0): Conv2dNormActivation stem (Conv2d 3 -> 24, BatchNorm2d, SiLU)
    (1)-(3): FusedMBConv stages (Conv2d + BatchNorm2d + SiLU blocks with StochasticDepth)
    (4)-(6): MBConv stages (expand, depthwise, and project Conv2d + BatchNorm2d + SiLU blocks with SqueezeExcitation and StochasticDepth)
    ...
  )
  ...
)
[full EfficientNetV2-M architecture printout truncated]
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.13333333333333333, mode=row)
)
(5): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1368421052631579, mode=row)
)
(6): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.14035087719298245, mode=row)
)
(7): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.14385964912280705, mode=row)
)
(8): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1473684210526316, mode=row)
)
(9): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.15087719298245614, mode=row)
)
(10): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1543859649122807, mode=row)
)
(11): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.15789473684210525, mode=row)
)
(12): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.16140350877192985, mode=row)
)
(13): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1649122807017544, mode=row)
)
(14): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.16842105263157897, mode=row)
)
(15): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.17192982456140352, mode=row)
)
(16): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.17543859649122806, mode=row)
)
(17): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 304, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(304, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.17894736842105266, mode=row)
)
)
(7): Sequential(
(0): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(304, 1824, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(1824, 1824, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1824, bias=False)
(1): BatchNorm2d(1824, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1824, 76, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(76, 1824, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(1824, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1824561403508772, mode=row)
)
(1): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(512, 3072, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(3072, 3072, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=3072, bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(3072, 128, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(128, 3072, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(3072, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.18596491228070178, mode=row)
)
(2): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(512, 3072, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(3072, 3072, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=3072, bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(3072, 128, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(128, 3072, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(3072, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.18947368421052632, mode=row)
)
(3): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(512, 3072, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(3072, 3072, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=3072, bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(3072, 128, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(128, 3072, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(3072, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.19298245614035087, mode=row)
)
(4): MBConv(
(block): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(512, 3072, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Conv2dNormActivation(
(0): Conv2d(3072, 3072, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=3072, bias=False)
(1): BatchNorm2d(3072, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(3072, 128, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(128, 3072, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): Conv2dNormActivation(
(0): Conv2d(3072, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.19649122807017547, mode=row)
)
)
(8): Conv2dNormActivation(
(0): Conv2d(512, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1280, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(classifier): Sequential(
(0): Dropout(p=0.3, inplace=True)
(1): Linear(in_features=1280, out_features=1000, bias=True)
)
)
import torchvision.models as models
import torch.nn as nn

def get_transfer_model(out_channels):
    model_transfer = models.efficientnet_v2_m(weights=models.EfficientNet_V2_M_Weights.IMAGENET1K_V1)
    # freeze the pretrained feature extractor so only the new head is trained
    for param in model_transfer.features.parameters():
        param.requires_grad = False
    n_inputs = model_transfer.classifier[1].in_features
    # replace the last linear layer (n_inputs -> out_channels landmark classes)
    # new layers automatically have requires_grad = True
    last_layer = nn.Linear(n_inputs, out_channels)
    model_transfer.classifier[1] = last_layer
    # if GPU is available, move the model to GPU
    if use_cuda:
        model_transfer.cuda()
    # check to see that your last layer produces the expected number of outputs
    print(model_transfer.classifier[1].out_features)
    return model_transfer
## TODO: Specify model architecture
model_transfer = get_transfer_model(50)
#-#-# Do NOT modify the code below this line. #-#-#
if use_cuda:
model_transfer = model_transfer.cuda()
50
Question 3: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
Answer: I used transfer learning rather than training a network from scratch. I loaded EfficientNet-V2-M with its ImageNet (IMAGENET1K_V1) weights, froze every parameter in the feature extractor, and replaced the final Linear layer of the classifier with a new layer that outputs 50 values, one per landmark class; only this new layer is trained. This architecture is suitable for the problem because the landmark dataset is fairly small, while the pretrained backbone already encodes general visual features (edges, textures, shapes) learned from ImageNet that transfer well to photographs of landmarks, so the model reaches high accuracy with comparatively little training.
Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.
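The training cell below also relies on criterion_transfer and get_optimizer_transfer, which are defined in an earlier step of the notebook. As a minimal sketch of what those definitions might look like (an assumption, not the original cells), with the optimizer updating only the unfrozen classifier head:

import torch.nn as nn
import torch.optim as optim

# hypothetical sketch; the actual definitions live in an earlier notebook cell
criterion_transfer = nn.CrossEntropyLoss()

def get_optimizer_transfer(model):
    # only parameters with requires_grad=True (the new classifier layer) are optimized
    trainable_params = [p for p in model.parameters() if p.requires_grad]
    return optim.Adam(trainable_params, lr=1e-3)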
# TODO: train the model and save the best model parameters at filepath 'model_transfer.pt'
from pathlib import Path
def train_transfer(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
if use_cuda:
data, target = data.cuda(), target.cuda()
## TODO: find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - train_loss))
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - train_loss))
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
if use_cuda:
data, target = data.cuda(), target.cuda()
## TODO: update average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data.item() - valid_loss))
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: if the validation loss has decreased, save the model at the filepath stored in save_path
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), Path(save_path))
valid_loss_min = valid_loss
return model
train_transfer(20,loaders_transfer,model_transfer,get_optimizer_transfer(model_transfer),criterion_transfer,use_cuda,'model_transfer.pt')
#-#-# Do NOT modify the code below this line. #-#-#
# load the model that got the best validation accuracy
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
Epoch: 1 	Training Loss: 2.174039 	Validation Loss: 1.289945
Validation loss decreased (inf --> 1.289945). Saving model ...
Epoch: 2 	Training Loss: 1.154053 	Validation Loss: 0.974589
Validation loss decreased (1.289945 --> 0.974589). Saving model ...
Epoch: 3 	Training Loss: 0.929405 	Validation Loss: 0.856674
Validation loss decreased (0.974589 --> 0.856674). Saving model ...
Epoch: 4 	Training Loss: 0.801523 	Validation Loss: 0.808715
Validation loss decreased (0.856674 --> 0.808715). Saving model ...
Epoch: 5 	Training Loss: 0.703114 	Validation Loss: 0.782066
Validation loss decreased (0.808715 --> 0.782066). Saving model ...
Epoch: 6 	Training Loss: 0.664400 	Validation Loss: 0.765011
Validation loss decreased (0.782066 --> 0.765011). Saving model ...
Epoch: 7 	Training Loss: 0.605638 	Validation Loss: 0.731871
Validation loss decreased (0.765011 --> 0.731871). Saving model ...
Epoch: 8 	Training Loss: 0.567208 	Validation Loss: 0.756416
Epoch: 9 	Training Loss: 0.549030 	Validation Loss: 0.718448
Validation loss decreased (0.731871 --> 0.718448). Saving model ...
Epoch: 10 	Training Loss: 0.511670 	Validation Loss: 0.711560
Validation loss decreased (0.718448 --> 0.711560). Saving model ...
Epoch: 11 	Training Loss: 0.484430 	Validation Loss: 0.709874
Validation loss decreased (0.711560 --> 0.709874). Saving model ...
Epoch: 12 	Training Loss: 0.474407 	Validation Loss: 0.688658
Validation loss decreased (0.709874 --> 0.688658). Saving model ...
Epoch: 13 	Training Loss: 0.453157 	Validation Loss: 0.703066
Epoch: 14 	Training Loss: 0.425713 	Validation Loss: 0.691303
Epoch: 15 	Training Loss: 0.431300 	Validation Loss: 0.701844
Epoch: 16 	Training Loss: 0.409142 	Validation Loss: 0.704883
Epoch: 17 	Training Loss: 0.389446 	Validation Loss: 0.712505
Epoch: 18 	Training Loss: 0.405996 	Validation Loss: 0.700811
Epoch: 19 	Training Loss: 0.381226 	Validation Loss: 0.702917
Epoch: 20 	Training Loss: 0.388445 	Validation Loss: 0.697011
<All keys matched successfully>
Try out your model on the test dataset of landmark images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
Test Loss: 0.657335 Test Accuracy: 82% (1032/1250)
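For reference, the test helper called above was defined in an earlier step of the notebook; a minimal sketch consistent with the loss/accuracy format printed here (an assumption, not the original cell) might look like:

import torch

def test(loaders, model, criterion, use_cuda):
    test_loss = 0.0
    correct, total = 0, 0
    model.eval()
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(loaders['test']):
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            output = model(data)
            loss = criterion(output, target)
            # running average of the batch losses
            test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.item() - test_loss))
            pred = output.argmax(dim=1)
            correct += (pred == target).sum().item()
            total += target.size(0)
    print('Test Loss: {:.6f}'.format(test_loss))
    print('Test Accuracy: {:.0f}% ({}/{})'.format(100. * correct / total, correct, total))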
Great job creating your CNN models! Now that you have put in all the hard work of creating accurate classifiers, let's define some functions to make it easy for others to use your classifiers.
Implement the function predict_landmarks, which accepts a file path to an image and an integer k, and then predicts the top k most likely landmarks. You are required to use your transfer learned CNN from Step 2 to predict the landmarks.
An example of the expected behavior of predict_landmarks:
>>> predicted_landmarks = predict_landmarks('example_image.jpg', 3)
>>> print(predicted_landmarks)
['Golden Gate Bridge', 'Brooklyn Bridge', 'Sydney Harbour Bridge']
import os

# build an index -> class-name mapping from the training folder names (e.g. "09.Golden_Gate_Bridge")
class_dict = {int(name.split('.', 1)[0]): name.split('.', 1)[1]
              for name in os.listdir('landmark_images/train/')}
class_dict
{0: 'Haleakala_National_Park',
1: 'Mount_Rainier_National_Park',
2: 'Ljubljana_Castle',
3: 'Dead_Sea',
4: 'Wroclaws_Dwarves',
5: 'London_Olympic_Stadium',
6: 'Niagara_Falls',
7: 'Stonehenge',
8: 'Grand_Canyon',
9: 'Golden_Gate_Bridge',
10: 'Edinburgh_Castle',
11: 'Mount_Rushmore_National_Memorial',
12: 'Kantanagar_Temple',
13: 'Yellowstone_National_Park',
14: 'Terminal_Tower',
15: 'Central_Park',
16: 'Eiffel_Tower',
17: 'Changdeokgung',
18: 'Delicate_Arch',
19: 'Vienna_City_Hall',
20: 'Matterhorn',
21: 'Taj_Mahal',
22: 'Moscow_Raceway',
23: 'Externsteine',
24: 'Soreq_Cave',
25: 'Banff_National_Park',
26: 'Pont_du_Gard',
27: 'Seattle_Japanese_Garden',
28: 'Sydney_Harbour_Bridge',
29: 'Petronas_Towers',
30: 'Brooklyn_Bridge',
31: 'Washington_Monument',
32: 'Hanging_Temple',
33: 'Sydney_Opera_House',
34: 'Great_Barrier_Reef',
35: 'Monumento_a_la_Revolucion',
36: 'Badlands_National_Park',
37: 'Atomium',
38: 'Forth_Bridge',
39: 'Gateway_of_India',
40: 'Stockholm_City_Hall',
41: 'Machu_Picchu',
42: 'Death_Valley_National_Park',
43: 'Gullfoss_Falls',
44: 'Trevi_Fountain',
45: 'Temple_of_Heaven',
46: 'Great_Wall_of_China',
47: 'Prague_Astronomical_Clock',
48: 'Whitby_Abbey',
49: 'Temple_of_Olympian_Zeus'}
class_dict = {0: 'Haleakala_National_Park',
1: 'Mount_Rainier_National_Park',
2: 'Ljubljana_Castle',
3: 'Dead_Sea',
4: 'Wroclaws_Dwarves',
5: 'London_Olympic_Stadium',
6: 'Niagara_Falls',
7: 'Stonehenge',
8: 'Grand_Canyon',
9: 'Golden_Gate_Bridge',
10: 'Edinburgh_Castle',
11: 'Mount_Rushmore_National_Memorial',
12: 'Kantanagar_Temple',
13: 'Yellowstone_National_Park',
14: 'Terminal_Tower',
15: 'Central_Park',
16: 'Eiffel_Tower',
17: 'Changdeokgung',
18: 'Delicate_Arch',
19: 'Vienna_City_Hall',
20: 'Matterhorn',
21: 'Taj_Mahal',
22: 'Moscow_Raceway',
23: 'Externsteine',
24: 'Soreq_Cave',
25: 'Banff_National_Park',
26: 'Pont_du_Gard',
27: 'Seattle_Japanese_Garden',
28: 'Sydney_Harbour_Bridge',
29: 'Petronas_Towers',
30: 'Brooklyn_Bridge',
31: 'Washington_Monument',
32: 'Hanging_Temple',
33: 'Sydney_Opera_House',
34: 'Great_Barrier_Reef',
35: 'Monumento_a_la_Revolucion',
36: 'Badlands_National_Park',
37: 'Atomium',
38: 'Forth_Bridge',
39: 'Gateway_of_India',
40: 'Stockholm_City_Hall',
41: 'Machu_Picchu',
42: 'Death_Valley_National_Park',
43: 'Gullfoss_Falls',
44: 'Trevi_Fountain',
45: 'Temple_of_Heaven',
46: 'Great_Wall_of_China',
47: 'Prague_Astronomical_Clock',
48: 'Whitby_Abbey',
49: 'Temple_of_Olympian_Zeus'}
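As an aside, if the data loaders in the earlier steps were built with torchvision's ImageFolder (an assumption here), the same index-to-name mapping can be recovered directly from the dataset attributes instead of parsing directory names:

from torchvision import datasets

# assumes class folders are named like "09.Golden_Gate_Bridge" under landmark_images/train/
train_dataset = datasets.ImageFolder('landmark_images/train/')
idx_to_class = {idx: name.split('.', 1)[1] for name, idx in train_dataset.class_to_idx.items()}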
from PIL import Image
import torch
## TODO: return the names of the top k landmarks predicted by the transfer learned CNN
def predict_landmarks(img_path, k):
    img_transform = transforms.Compose([
        transforms.Resize((480, 480)),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    # force 3 channels so grayscale or RGBA images also work
    img = Image.open(img_path).convert('RGB')
    img = img_transform(img)
    model = get_transfer_model(50)
    model.load_state_dict(torch.load('model_transfer.pt'))
    if use_cuda:
        model = model.cuda()
    model.eval()
    with torch.no_grad():
        if use_cuda:
            img = img.cuda()
        out = model(img.unsqueeze(0))
        # indices of the k highest-scoring classes
        topk = torch.topk(out, k, dim=1)
        top_indices = list(topk.indices.cpu().numpy()[0])
    return [class_dict[x] for x in top_indices]
# test on a sample image
predict_landmarks('images/test/09.Golden_Gate_Bridge/190f3bae17c32c37.jpg', 5)
50
['Golden_Gate_Bridge', 'Forth_Bridge', 'Niagara_Falls', 'Brooklyn_Bridge', 'Sydney_Opera_House']
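One caveat worth noting: the Resize and Normalize values hard-coded in predict_landmarks should match whatever preprocessing was applied during training, otherwise inference accuracy can drift. As a sanity check (assuming torchvision 0.13 or newer), the canonical preprocessing bundled with the pretrained weights can be inspected and reused:

import torchvision.models as models

# the weights enum carries the preprocessing pipeline used for the ImageNet weights
weights = models.EfficientNet_V2_M_Weights.IMAGENET1K_V1
preprocess = weights.transforms()  # Resize / CenterCrop / ToTensor / Normalize preset
print(preprocess)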
In the code cell below, implement the function suggest_locations, which accepts a file path to an image as input, and then displays the image and the top 3 most likely landmarks as predicted by predict_landmarks.
Some sample output for suggest_locations is provided below, but feel free to design your own user experience!

def suggest_locations(img_path):
    # get landmark predictions
    predicted_landmarks = predict_landmarks(img_path, 3)
    ## TODO: display image and display landmark predictions
    import matplotlib.pyplot as plt
    img = plt.imread(Path(img_path))
    plt.imshow(img)
    names = [name.replace('_', ' ') for name in predicted_landmarks]
    plt.text(0, img.shape[0] + 100,
             f'Is this picture of the {", ".join(names[:-1])} or {names[-1]} ?')
    plt.show()
# test on a sample image
suggest_locations('images/test/09.Golden_Gate_Bridge/190f3bae17c32c37.jpg')
50
Test your algorithm by running the suggest_locations function on at least four images on your computer. Feel free to use any images you like.
Question 4: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
Answer: The output is better than expected: the model reaches 82% test accuracy, and the correct landmark appears at the top of the sample prediction above. Three possible points of improvement: (1) fine-tune some of the later feature-extractor layers instead of freezing the whole backbone; (2) make sure the preprocessing in predict_landmarks (resize size and normalization statistics) exactly matches the transforms used during training; (3) load the trained model once and reuse it instead of rebuilding and reloading it inside every predict_landmarks call, which would also make inference much faster (see the sketch below).
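As a rough sketch of point (3), assuming the helpers defined above are available, the network could be built and loaded a single time and then reused by every call:

# hypothetical refactor: create and load the model once, outside predict_landmarks
inference_model = get_transfer_model(50)
inference_model.load_state_dict(torch.load('model_transfer.pt'))
if use_cuda:
    inference_model = inference_model.cuda()
inference_model.eval()
# predict_landmarks could then accept the model as an argument (or use this
# module-level instance) instead of rebuilding the network for every image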
## TODO: Execute the `suggest_locations` function on
## at least 4 images on your computer.
## Feel free to use as many code cells as needed.
import os
root = Path('landmark_images/test')
dirs = [x for x in root.iterdir() if x.is_dir()]
images = []
for i in dirs:
images.extend([x for x in i.iterdir() if x.is_file()])
for i in range(0,len(images),11):
print(images[i])
suggest_locations(images[i])
landmark_images\test\00.Haleakala_National_Park\042517e40d998160.jpg 50
landmark_images\test\00.Haleakala_National_Park\329b1be89952a6ea.jpg 50
landmark_images\test\00.Haleakala_National_Park\79b4dc3771b2d75e.jpg 50
landmark_images\test\01.Mount_Rainier_National_Park\359e5588b6045ee8.jpg 50
landmark_images\test\01.Mount_Rainier_National_Park\761daa5eded76313.jpg 50
landmark_images\test\02.Ljubljana_Castle\1bafa4266447ca3c.jpg 50
landmark_images\test\02.Ljubljana_Castle\6eafeab9c5b50e93.jpg 50
landmark_images\test\03.Dead_Sea\0b5870c7c410cd37.jpg 50
landmark_images\test\03.Dead_Sea\3e61918614c39f96.jpg 50
landmark_images\test\03.Dead_Sea\7e0f9f4f14eb9af1.jpg 50
landmark_images\test\04.Wroclaws_Dwarves\428e988c4b8d9d3e.jpg 50
landmark_images\test\04.Wroclaws_Dwarves\76f6e138ec02d56c.jpg 50
landmark_images\test\05.London_Olympic_Stadium\2b924dbe0974d3ea.jpg 50
landmark_images\test\05.London_Olympic_Stadium\63a2a1d2880a8b9c.jpg 50
landmark_images\test\06.Niagara_Falls\0c2aef04fe14c796.jpg 50
landmark_images\test\06.Niagara_Falls\428a5466eb258107.jpg 50
landmark_images\test\07.Stonehenge\04d1a195c7e6c899.jpg 50
landmark_images\test\07.Stonehenge\4169404dc415eb89.jpg 50
landmark_images\test\07.Stonehenge\7b38a3fa5b58466c.jpg 50
landmark_images\test\08.Grand_Canyon\2f64a93067e8eb13.jpg 50
landmark_images\test\08.Grand_Canyon\6b659b399ccd04ad.jpg 50
landmark_images\test\09.Golden_Gate_Bridge\1bc7a7f05288153b.jpg 50
landmark_images\test\09.Golden_Gate_Bridge\5ed314bab8075930.jpg 50
landmark_images\test\10.Edinburgh_Castle\19eab04e683dfe4a.jpg 50
landmark_images\test\10.Edinburgh_Castle\56fe194601f8ae9c.jpg 50
landmark_images\test\11.Mount_Rushmore_National_Memorial\0855bcdafebf7158.jpg 50
landmark_images\test\11.Mount_Rushmore_National_Memorial\3e533c14e48e0de3.jpg 50
landmark_images\test\11.Mount_Rushmore_National_Memorial\78f9654319180e1b.jpg 50
landmark_images\test\12.Kantanagar_Temple\2c8606adee9504a5.jpg 50
landmark_images\test\12.Kantanagar_Temple\61be51b7ec39e372.jpg 50
landmark_images\test\13.Yellowstone_National_Park\305775082f1b7f02.jpg 50
landmark_images\test\13.Yellowstone_National_Park\51a0bb6d380b6fef.jpg 50
landmark_images\test\14.Terminal_Tower\1a42461138f606c9.jpg 50
landmark_images\test\14.Terminal_Tower\3eee8eab1d93207a.jpg 50
landmark_images\test\14.Terminal_Tower\7dffcaf9f66d3fc2.jpg 50
landmark_images\test\15.Central_Park\3b005ec4bf8e5b85.jpg 50
landmark_images\test\15.Central_Park\6efa9ed216cfaca5.jpg 50
landmark_images\test\16.Eiffel_Tower\26f82dab964ef649.jpg 50
landmark_images\test\16.Eiffel_Tower\58d099b15ee74b73.jpg 50
landmark_images\test\17.Changdeokgung\0b95751e4ccbbd19.jpg 50
landmark_images\test\17.Changdeokgung\5856e204147c49fa.jpg 50
landmark_images\test\18.Delicate_Arch\0a644b21cc4f7eb5.jpg 50
landmark_images\test\18.Delicate_Arch\2c630f04fcd24719.jpg 50
landmark_images\test\18.Delicate_Arch\774e3f8f7a9c3604.jpg 50
landmark_images\test\19.Vienna_City_Hall\33fdae363340e364.jpg 50
landmark_images\test\19.Vienna_City_Hall\56718759419f44c9.jpg 50
landmark_images\test\20.Matterhorn\1876d4590a15fc46.jpg 50
landmark_images\test\20.Matterhorn\4fba1da3235d07d6.jpg 50
landmark_images\test\21.Taj_Mahal\14a11ddef5fb84af.jpg 50
landmark_images\test\21.Taj_Mahal\5469792e0b15c65f.jpg 50
landmark_images\test\22.Moscow_Raceway\01969b4e5e396b31.jpg 50
landmark_images\test\22.Moscow_Raceway\44f1ca5a50e7ce91.jpg 50
landmark_images\test\22.Moscow_Raceway\6a87c4ef304dc4b3.jpg 50
landmark_images\test\23.Externsteine\2ddb9fd370c9eb08.jpg 50
landmark_images\test\23.Externsteine\664f1b750e82782f.jpg 50
landmark_images\test\24.Soreq_Cave\2f6b1b995778b892.jpg 50
landmark_images\test\24.Soreq_Cave\581cfaa4280a860d.jpg 50
landmark_images\test\25.Banff_National_Park\182da82f82e6a787.jpg 50
landmark_images\test\25.Banff_National_Park\4095e61a71b4e958.jpg 50
landmark_images\test\25.Banff_National_Park\7f1242fc19c69e0a.jpg 50
landmark_images\test\26.Pont_du_Gard\460f9442170554e6.jpg 50
landmark_images\test\26.Pont_du_Gard\6d33150d95313039.jpg 50
landmark_images\test\27.Seattle_Japanese_Garden\24621d0983313599.jpg 50
landmark_images\test\27.Seattle_Japanese_Garden\682139f4cc6412be.jpg 50
landmark_images\test\28.Sydney_Harbour_Bridge\16c0f11519d5a2b5.jpg 50
landmark_images\test\28.Sydney_Harbour_Bridge\4e25a44e02916049.jpg 50
landmark_images\test\29.Petronas_Towers\0ae950a45582e3a1.jpg 50
landmark_images\test\29.Petronas_Towers\3dfb07dbf222ed0a.jpg 50
landmark_images\test\29.Petronas_Towers\7a60edac1920a106.jpg 50
landmark_images\test\30.Brooklyn_Bridge\2b2061fbe0abcb7c.jpg 50
landmark_images\test\30.Brooklyn_Bridge\76b3106b046d3f45.jpg 50
landmark_images\test\31.Washington_Monument\1d18a23956807778.jpg 50
landmark_images\test\31.Washington_Monument\3f92ba97b309ced8.jpg 50
landmark_images\test\32.Hanging_Temple\1ce97b98d1313aaf.jpg 50
landmark_images\test\32.Hanging_Temple\6b4607c002f7f621.jpg 50
landmark_images\test\33.Sydney_Opera_House\08087d98660bdbd8.jpg 50
landmark_images\test\33.Sydney_Opera_House\2c89aedd9e0ee12a.jpg 50
landmark_images\test\33.Sydney_Opera_House\72be97459cedc17c.jpg 50
landmark_images\test\34.Great_Barrier_Reef\4a0ea6d00fffc5de.jpg 50
landmark_images\test\34.Great_Barrier_Reef\685ce33c2c8b0178.jpg 50
landmark_images\test\35.Monumento_a_la_Revolucion\1c793b20cdbb8b11.jpg 50
landmark_images\test\35.Monumento_a_la_Revolucion\57ae5d0f31394c5a.jpg 50
landmark_images\test\36.Badlands_National_Park\0e3c975ccbf0442a.jpg 50
landmark_images\test\36.Badlands_National_Park\4dde9187cb31166e.jpg 50
landmark_images\test\36.Badlands_National_Park\7bd45178170d75c6.jpg 50
landmark_images\test\37.Atomium\2ac5b8d85225d0f9.jpg 50
landmark_images\test\37.Atomium\7657e4e93df87dca.jpg 50
landmark_images\test\38.Forth_Bridge\3747ee03a240fa8b.jpg 50
landmark_images\test\38.Forth_Bridge\6183bc69574d7045.jpg 50
landmark_images\test\39.Gateway_of_India\1e7955ba85230d5c.jpg 50
landmark_images\test\39.Gateway_of_India\58cc79f7c256fe4f.jpg 50
landmark_images\test\40.Stockholm_City_Hall\13b55db86563dac5.jpg 50
landmark_images\test\40.Stockholm_City_Hall\446cfcfa4d1d852d.jpg 50
landmark_images\test\40.Stockholm_City_Hall\7ded0527ac0c860b.jpg 50
landmark_images\test\41.Machu_Picchu\2f0f11e947907961.jpg 50
landmark_images\test\41.Machu_Picchu\529aeaecb9ba4257.jpg 50
landmark_images\test\42.Death_Valley_National_Park\1a8bf9cb96818d8d.jpg 50
landmark_images\test\42.Death_Valley_National_Park\5a4dca33b5a8b74f.jpg 50
landmark_images\test\43.Gullfoss_Falls\1ebc2ece28e1cbad.jpg 50
landmark_images\test\43.Gullfoss_Falls\5c1d0b53fce01610.jpg 50
landmark_images\test\44.Trevi_Fountain\097ff54ab4271a6f.jpg 50
landmark_images\test\44.Trevi_Fountain\41c52ea1d3b7593a.jpg 50
landmark_images\test\44.Trevi_Fountain\791981db7256a47d.jpg 50
landmark_images\test\45.Temple_of_Heaven\2af4218fe2a1f900.jpg 50
landmark_images\test\45.Temple_of_Heaven\6356845282dfcde3.jpg 50
landmark_images\test\46.Great_Wall_of_China\25e08232feee7ea3.jpg 50
landmark_images\test\46.Great_Wall_of_China\4a7209de41431e9b.jpg 50
landmark_images\test\47.Prague_Astronomical_Clock\0fcdf206e1e36f05.jpg 50
landmark_images\test\47.Prague_Astronomical_Clock\59f46ae80330519f.jpg 50
landmark_images\test\47.Prague_Astronomical_Clock\7d807d38b49395bb.jpg 50
landmark_images\test\48.Whitby_Abbey\376e025c0da1b4b6.jpg 50
landmark_images\test\48.Whitby_Abbey\75dc03b37fea0ae4.jpg 50
landmark_images\test\49.Temple_of_Olympian_Zeus\34aa0fbd65529602.jpg 50
landmark_images\test\49.Temple_of_Olympian_Zeus\5eaecfb5d906417b.jpg 50